18 research outputs found

    Coincidence and coherent data analysis methods for gravitational wave bursts in a network of interferometric detectors

    Full text link
    Network data analysis methods are the only way to properly separate real gravitational wave (GW) transient events from detector noise. They can be divided into two generic classes: the coincidence method and the coherent analysis. The former uses lists of selected events provided by each interferometer in the network and tries to correlate them in time to identify a physical signal. Instead of this binary treatment of the detector outputs (signal present or absent), the latter first merges the interferometer data and looks for a common pattern, consistent with an assumed GW waveform and a given source location in the sky. Thresholds are only applied later, to validate or reject the hypothesis made. As coherent algorithms use more complete information than coincidence methods, they are expected to provide better detection performance, but at a higher computational cost. An efficient filter must achieve a good compromise between a low false alarm rate (hence triggering on data at a manageable rate) and a high detection efficiency. Therefore, the comparison of the two approaches is carried out using so-called Receiver Operating Characteristics (ROC), giving the relationship between the false alarm rate and the detection efficiency for a given method. This paper investigates this question via Monte-Carlo simulations, using the network model developed in a previous article. Comment: Spelling mistake corrected in one author's name
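As a rough illustration of the ROC comparison described above (a toy sketch, not the paper's actual pipeline), a ROC curve for any detection statistic can be estimated by Monte-Carlo: draw the statistic under the noise-only and signal-plus-noise hypotheses, then sweep a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def roc_curve(noise_stats, signal_stats, thresholds):
    """False-alarm rate and detection efficiency for each threshold."""
    far = np.array([(noise_stats > t).mean() for t in thresholds])
    eff = np.array([(signal_stats > t).mean() for t in thresholds])
    return far, eff

# Toy detection statistic: Gaussian under noise, shifted under signal (SNR = 3).
noise = rng.normal(0.0, 1.0, 100_000)
signal = rng.normal(3.0, 1.0, 100_000)

thresholds = np.linspace(-2, 6, 81)
far, eff = roc_curve(noise, signal, thresholds)
```

Raising the threshold moves along the curve towards lower false-alarm rate and lower efficiency; comparing two filters amounts to comparing their (far, eff) curves point by point.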

    Comparison of filters for detecting gravitational wave bursts in interferometric detectors

    Get PDF
    Filters developed to detect short bursts of gravitational waves in interferometric detector outputs are compared on three main points. Conventional Receiver Operating Characteristics (ROC) are first built for all the considered filters and for three typical burst signals. Optimized ROC are shown for a simple pulse signal in order to estimate the best detection efficiency of the filters in the ideal case, while realistic ones, obtained with filters working with several "templates", show how detection efficiencies can be degraded in a practical implementation. Secondly, estimates of the biases and statistical errors on the reconstructed time of arrival of pulse-like signals are given for each filter. Such results are crucial for future coincidence studies between gravitational wave detectors, but also with neutrino or optical detectors. As most of the filters require a pre-whitening of the detector noise, the sensitivity to an imperfect noise whitening procedure is finally analysed. For this purpose, lines of various frequencies and amplitudes are added to a Gaussian white noise and the outputs of the filters are studied in order to monitor the excess of false alarms induced by the lines. The comparison of the performances of the different filters finally shows that they are complementary rather than competitive. Comment: 32 pages (14 figures), accepted for publication in Phys. Rev.
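The bias and statistical error on a reconstructed time of arrival can be estimated by Monte-Carlo, as a hedged sketch (toy Gaussian pulse and a plain matched filter, not any of the paper's specific filters):

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 1024.0                      # sampling rate (Hz), illustrative value
t = np.arange(0, 1.0, 1 / fs)
true_t0 = 0.5                    # true arrival time (s)

def pulse(t, t0, tau=0.01):
    """Toy Gaussian pulse centred on t0."""
    return np.exp(-0.5 * ((t - t0) / tau) ** 2)

# Template centred in its own window, so zero lag means aligned pulses.
template = pulse(t, t[len(t) // 2])

def matched_filter_toa(data, template, t):
    """Arrival-time estimate from the peak of the correlation."""
    corr = np.correlate(data, template, mode="same")
    return t[np.argmax(corr)]

# Monte-Carlo estimate of the bias and statistical error on the TOA.
estimates = []
for _ in range(200):
    data = 5.0 * pulse(t, true_t0) + rng.normal(0, 1, t.size)
    estimates.append(matched_filter_toa(data, template, t))
estimates = np.array(estimates)
bias = estimates.mean() - true_t0
sigma = estimates.std()
```

Repeating this for each filter and signal shape gives the bias/error tables the abstract refers to.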

    Performance of hybrid modelling on a failure process in industrial systems

    Get PDF
    A hybrid approach is proposed to establish the diagnosis of an electric motor. The approach combines a physical model of the real system with a machine-learning algorithm, in order to improve the diagnosis performance. Keywords: hybrid model, machine learning, knowledge-based model, failure process, high-performance computing

    Reconstruction of source location in a network of gravitational wave interferometric detectors

    Get PDF
    This paper deals with the reconstruction of the direction of a gravitational wave source using detections made by a network of interferometric detectors, mainly the LIGO and Virgo detectors. We suppose that an event has been seen in coincidence using a filter applied to the three detector data streams. Using the arrival time (and its associated error) of the gravitational signal in each detector, the direction of the source in the sky is computed with a chi^2 minimization technique. For reasonably large signals (SNR > 4.5 in all detectors), the mean angular error between the real location and the reconstructed one is about 1 degree. We also investigate the effect of the network geometry, assuming the same angular response for all interferometric detectors. It appears that the reconstruction quality is not uniform over the sky and is degraded when the source approaches the plane defined by the three detectors. Adding at least one other detector to the LIGO-Virgo network reduces the blind regions, and in the case of 6 detectors, a precision better than 1 degree on the source direction can be reached for 99% of the sky. Comment: Accepted in Phys. Rev.
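The chi^2 triangulation idea can be sketched as follows. Everything here is illustrative: the detector positions are made up (not the real LIGO/Virgo coordinates), and a real analysis must also handle the mirror solution about the detector plane, which this toy ignores by simply checking the fit quality.

```python
import numpy as np
from scipy.optimize import minimize

c = 299_792_458.0  # speed of light (m/s)

# Illustrative detector positions (m) in an Earth-centred frame.
detectors = np.array([
    [-2.16e6, -3.83e6,  4.60e6],
    [-7.43e4, -5.50e6,  3.22e6],
    [ 4.55e6,  8.43e5,  4.38e6],
])

def unit_vector(theta, phi):
    """Propagation direction for sky angles (theta, phi)."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def model_delays(theta, phi):
    """Arrival-time delays of detectors 1, 2 relative to detector 0."""
    n = unit_vector(theta, phi)
    return (detectors[1:] - detectors[0]) @ n / c

def chi2(params, measured_delays, sigma):
    return np.sum(((measured_delays - model_delays(*params)) / sigma) ** 2)

# Fake a source, then reconstruct its direction.
true_angles = (1.1, 0.7)
sigma = 1e-4                       # timing error (s), illustrative
rng = np.random.default_rng(3)
measured = model_delays(*true_angles) + rng.normal(0, sigma, 2)

# Multi-start minimization to avoid poor local minima.
best = min(
    (minimize(chi2, x0, args=(measured, sigma), method="Nelder-Mead")
     for x0 in [(0.5, 0.5), (1.2, 0.8), (2.5, 4.0)]),
    key=lambda r: r.fun,
)
theta_rec, phi_rec = best.x
```

With timing errors of order 0.1 ms on baselines of thousands of kilometres, the recovered delays match the measurements and the direction is constrained to roughly a degree, consistent with the scale quoted in the abstract.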

    Performance of hybrid modelling on a failure process in industrial systems

    Get PDF
    A non-localised failure of a component can cause irreparable damage, and it can also lead to the complete shutdown of the industrial system if it is not detected in time. Indeed, the first step in handling a failure process is the detection of the fault; locating the fault is the second step, needed to know at which level of the system to intervene. Numerous methods for diagnosing industrial systems have already proved their worth. They are mainly based on physics-based behaviour laws. However, these behavioural models are generic and are difficult to adapt to particular usage profiles. Moreover, when dealing with complex systems, implementing the behavioural laws for the coupling of multiple components interacting with each other is a laborious and time-consuming task. The growing instrumentation of industrial systems also encourages exploiting the potential of real-time data collected on the systems. The problem with studying the data alone is the lack of transparency of models created solely from these data: the weight of the interactions between the system's variables is not always identifiable. This means that models developed from data will not be easily transposable from one system to another while guaranteeing the same performance. To improve this adaptability, the idea is to draw on the knowledge of the system in question and to integrate it into the modelling. For this purpose, models based on physical knowledge and models based on data learning are coupled in order to study the overall performance of this type of modelling. In the literature, this coupling is called hybrid modelling. To understand the construction process of such a model, the study focuses on the modelling of a DC electric motor. This application, which is widely studied in the literature, allows us to exploit existing physical models of the system. The objective of this paper is therefore to study the performance of hybrid modelling for diagnosing the failures of a DC electric motor. To this end, the paper describes the construction of the data-based model and of the theoretical model, discussing the capabilities and limitations of each. The implementation of a hybrid approach is then detailed. Finally, the performance of the implemented models is presented.
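The physics/data coupling described above can be sketched in a few lines. This is an assumed minimal form of hybrid modelling (a physics model corrected by a data-driven residual fit), with a made-up motor equation and constants, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

k_e, R = 0.1, 1.2          # motor constants (illustrative values)

def physics_model(V, i):
    """Knowledge-based prediction of the angular speed (rad/s)."""
    return (V - R * i) / k_e

def true_motor(V, i):
    """Ground truth with an unmodelled nonlinear friction term."""
    return physics_model(V, i) - 0.5 * np.sqrt(np.abs(V))

# Training data collected on the (simulated) real system.
V = rng.uniform(5, 24, 500)
i = rng.uniform(0.1, 2.0, 500)
omega = true_motor(V, i) + rng.normal(0, 0.05, 500)

def features(V):
    V = np.atleast_1d(V)
    return np.column_stack([np.ones_like(V), np.sqrt(np.abs(V))])

# Hybrid step: fit a simple data-driven model on the physics residual.
residual = omega - physics_model(V, i)
coef, *_ = np.linalg.lstsq(features(V), residual, rcond=None)

def hybrid_model(V, i):
    """Physics prediction corrected by the learned residual model."""
    return physics_model(V, i) + features(V) @ coef
```

The data-driven part only has to learn what the physics misses, which is what makes the hybrid model easier to transpose than a purely data-based one.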

    An elliptical tiling method to generate a 2-dimensional set of templates for gravitational wave search

    Get PDF
    Searching for a signal that depends on unknown parameters in a noisy background with matched filtering techniques always requires analysing the data with several templates in parallel, in order to ensure a proper match between the filter and the real waveform. The key feature of such an implementation is the design of the filter bank, which must be small to limit the computational cost while keeping the detection efficiency as high as possible. This paper presents a geometrical method which allows one to cover the corresponding physical parameter space with a set of ellipses, each of them associated with a given template. After a description of the main characteristics of the algorithm, the method is applied in the field of gravitational wave (GW) data analysis to the search for damped sine signals. Such waveforms are expected to be produced during the de-excitation phase of black holes -- the so-called 'ringdown' signals -- and are also encountered in some numerically computed supernova signals. Comment: Accepted in PR
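A much-simplified version of ellipse tiling can be sketched for the special case of a constant metric, where the match ellipses all have the same axis-aligned shape and a hexagonal lattice covers the plane. The semi-axes and parameter ranges below are made up, and the real method handles position-dependent ellipses, which this toy does not:

```python
import numpy as np

# Semi-axes of the constant-match ellipse in the (f0, tau) parameter plane.
a, b = 2.0, 0.5            # along f0 (Hz) and tau (s), illustrative values

f0_range = (50.0, 100.0)
tau_range = (0.0, 10.0)

def hexagonal_bank(f0_range, tau_range, a, b):
    """Place templates so that their match ellipses cover the rectangle.

    In coordinates rescaled by (a, b) the ellipses become unit circles;
    the optimal covering is then a triangular lattice: rows separated by
    1.5*b, columns by sqrt(3)*a, alternate rows shifted by half a column.
    The loops overshoot the ranges so the edges stay covered.
    """
    dx, dy = np.sqrt(3) * a, 1.5 * b
    bank, row = [], 0
    tau = tau_range[0]
    while tau < tau_range[1] + dy:
        f0 = f0_range[0] + (row % 2) * dx / 2
        while f0 < f0_range[1] + dx:
            bank.append((f0, tau))
            f0 += dx
        tau += dy
        row += 1
    return np.array(bank)

bank = hexagonal_bank(f0_range, tau_range, a, b)
```

Every point of the parameter space then lies within one rescaled unit of some template, i.e. inside at least one match ellipse, with a bank size that grows only with the area of the space divided by the ellipse area.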

    Detection of a close supernova gravitational wave burst in a network of interferometers, neutrino and optical detectors

    Full text link
    Trying to detect the gravitational wave (GW) signal emitted by a type II supernova is a major challenge for the GW community. Indeed, the corresponding waveform is not accurately modeled, as the supernova physics is very complex; in addition, all the existing numerical simulations agree on the weakness of the GW emission, thus limiting the number of potentially detectable sources. Consequently, triggering on the GW signal with a confidence level high enough to claim a detection directly is very difficult, even with a network of interferometric detectors. On the other hand, one can hope to benefit from the neutrino and optical emissions associated with the supernova explosion, in order to discover and study GW radiation in an event already detected independently. This article presents some realistic scenarios for the search for supernova GW bursts, based on the present knowledge of the emitted signals and on the results of network data analysis simulations. Both the direct search and the confirmation of the supernova event are considered. In addition, some physical studies following the discovery of a supernova GW emission are also mentioned: from the absolute neutrino mass to supernova physics or the black hole signature, the potential spectrum of discoveries is wide. Comment: Revised version, accepted for publication in Astroparticle Physics

    Longitudinal control and optical characterization of the Virgo detector

    No full text
    The Virgo detector aims at the direct detection of the gravitational waves emitted by astrophysical objects. It is essentially a Michelson interferometer whose arms are 3 km long Fabry-Perot cavities, and it uses the power recycling technique. To reach the required sensitivity, the instrument must be maintained at its working point using both angular and longitudinal controls. This thesis covers my work on the algorithm implemented in the Global Control to keep the four characteristic lengths of the interferometer controlled to within a few nanometers, as part of the locking process. To achieve this result, we use the Pound-Drever technique, which provides, for an optical cavity, a signal sensitive to its length relative to the resonance position. Two algorithms have been tested. The first one is inspired by the algorithm developed by LIGO: the different lengths of Virgo are controlled sequentially. However, the differences between Virgo and LIGO caused this algorithm to fail on the instrument. The second algorithm brings the four lengths simultaneously onto the half fringe of the Michelson interferometer, and then brings the instrument to its working point in a few minutes, in a deterministic way. Another part of this work deals with in situ measurements of the optical parameters critical to the interpretation of the behaviour of the instrument. These measurements allowed us both to tune the Virgo simulation against the data and to prepare the lock acquisition algorithm. Finally, we examine the coupling between the Anderson technique, used for the angular control of the mirrors, and the longitudinal control of the optical cavities. We show the mechanism and evaluate its impact on the locking of Virgo.
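The Pound-Drever idea can be illustrated with a toy model (this is the textbook lossless, symmetric Fabry-Perot cavity with an illustrative reflectivity, not the Virgo implementation): the demodulated reflection signal is antisymmetric around resonance and crosses zero there, which is what makes it usable as a length error signal.

```python
import numpy as np

r = 0.98  # mirror amplitude reflectivity (illustrative, identical mirrors)

def reflection(phi):
    """Amplitude reflection coefficient of a lossless symmetric Fabry-Perot
    cavity as a function of the round-trip phase detuning phi."""
    return r * (np.exp(1j * phi) - 1) / (1 - r**2 * np.exp(1j * phi))

def pdh_error(phi, phi_mod=0.1):
    """Toy Pound-Drever error signal; phi_mod is the sideband phase offset."""
    F = reflection
    return np.imag(F(phi) * np.conj(F(phi + phi_mod))
                   - np.conj(F(phi)) * F(phi - phi_mod))
```

Near resonance the signal is linear in the detuning, so a servo such as the Global Control can feed it back onto the mirror positions to hold the cavity length.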

    Hybrid Energy Storage System with Unique Power Electronic Interface for Microgrids

    No full text
    ISBN: 978-1-4799-2984-9. The increasing penetration of Distributed Generation systems based on Renewable Energy Sources is introducing new challenges in the current centralized electric grid. Microgrids are one of the most useful and efficient ways to integrate renewable energy technologies. As the stability of a microgrid is highly sensitive, an energy storage system is essential, and it must satisfy two criteria: a high storage capacity and the ability to supply fast power variations. In order to satisfy these two constraints, this paper proposes the association of a Vanadium Redox Battery (VRB) and a Supercapacitor (SC) bank in a Hybrid Energy Storage System (HESS). A Three-Level Neutral Point Clamped (3LNPC) inverter is used as a unique interface between the HESS and the microgrid. The paper focuses on the dynamic modelling and validation of the HESS, the power division, and the modulation strategies used with the 3LNPC to mitigate the neutral point voltage unbalance effect.
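One common way to realise the battery/supercapacitor power division is frequency-domain splitting with a low-pass filter: the slow component of the demand goes to the high-capacity battery, the fast residual to the supercapacitors. This is an assumed, generic scheme for illustration, not necessarily the strategy used in the paper:

```python
import numpy as np

def split_power(p_demand, dt, tau=5.0):
    """First-order low-pass split of a sampled power demand signal.

    Slow component -> battery (VRB), fast residual -> supercapacitor (SC).
    tau is the filter time constant in seconds (illustrative value).
    """
    alpha = dt / (tau + dt)
    p_batt = np.empty_like(p_demand)
    acc = p_demand[0]
    for k, p in enumerate(p_demand):
        acc += alpha * (p - acc)      # discrete first-order low-pass
        p_batt[k] = acc
    p_sc = p_demand - p_batt          # the two branches sum to the demand
    return p_batt, p_sc

# Example: a step in demand plus fast fluctuations.
dt = 0.1
t = np.arange(0, 60, dt)
demand = 10.0 * (t > 10) + np.sin(2 * np.pi * 1.0 * t)
p_batt, p_sc = split_power(demand, dt)
```

By construction the two branch powers sum exactly to the demand, while the battery only sees a smoothed profile, which protects it from fast cycling.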